Results 1 - 4 of 4
1.
JMIR Mhealth Uhealth; 10(9): e38364, 2022 Sep 19.
Article in English | MEDLINE | ID: covidwho-2054780

ABSTRACT

BACKGROUND: Symptom checkers are clinical decision support apps for patients, used by tens of millions of people annually. They are designed to provide diagnostic and triage advice and to help users seek the appropriate level of care. Little evidence is available on their diagnostic and triage accuracy when used directly by patients with urgent conditions.

OBJECTIVE: The aim of this study is to determine the diagnostic and triage accuracy and usability of a symptom checker used by patients presenting to an emergency department (ED).

METHODS: We recruited a convenience sample of English-speaking patients presenting for care in an urban ED. Each consenting patient used a leading symptom checker from Ada Health before the ED evaluation. Diagnostic accuracy was evaluated by comparing the symptom checker's diagnoses, and those of 3 independent emergency physicians viewing the patient-entered symptom data, with the final diagnoses from the ED evaluation. The Ada diagnoses and triage recommendations were also critiqued by the independent physicians. Patients completed a usability survey based on the Technology Acceptance Model.

RESULTS: A total of 40 (80%) of the 50 participants approached completed the symptom checker assessment and usability survey. Their mean age was 39.3 (SD 15.9; range 18-76) years; 65% (26/40) were female, 68% (27/40) White, 48% (19/40) Hispanic or Latino, and 13% (5/40) Black or African American. Some cases had missing data or lacked a clear ED diagnosis, so 75% (30/40) were included in the analysis of diagnosis and 93% (37/40) in the analysis of triage. The sensitivity of Ada for at least one of the final ED diagnoses (based on its top 5 diagnoses) was 70% (95% CI 54%-86%), close to the mean sensitivity of 68.9% for the 3 physicians (based on their top 3 diagnoses). The physicians fully agreed with the Ada triage decisions in 62% (23/37) of cases and rated them safe but too cautious in 24% (9/37). The triage was rated unsafe and too risky in 22% (8/37) of cases by at least one physician, in 14% (5/37) by at least two physicians, and in 5% (2/37) by all 3 physicians. Usability was rated highly: participants agreed or strongly agreed with the 7 Technology Acceptance Model usability questions, with a mean score of 84.6%, although "satisfaction" and "enjoyment" were rated low.

CONCLUSIONS: This study provides preliminary evidence that a symptom checker can offer acceptable usability and diagnostic accuracy for patients with various urgent conditions. A total of 14% (5/37) of the symptom checker's triage recommendations were deemed unsafe and too risky by at least two physicians based on the symptoms recorded, similar to the results of studies on telephone and nurse triage. Larger studies of diagnostic and triage performance with direct patient use in different clinical environments are needed.
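As a quick arithmetic check on the reported accuracy figure, the minimal sketch below reproduces the 70% (95% CI 54%-86%) sensitivity from the implied 21 of 30 matched cases, assuming a simple Wald interval (the exact case count and interval method are not stated in the abstract).

```python
# Sketch: reproducing the reported sensitivity and 95% CI. The count of
# 21 matched cases out of 30 analyzable cases is inferred from the 70%
# figure and is an assumption, as is the use of a Wald interval.
from math import sqrt

def sensitivity_with_wald_ci(hits: int, total: int, z: float = 1.96):
    """Proportion of cases with a matching diagnosis, with a Wald 95% CI."""
    p = hits / total
    half_width = z * sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

p, low, high = sensitivity_with_wald_ci(hits=21, total=30)
print(f"sensitivity {p:.0%} (95% CI {low:.0%}-{high:.0%})")  # ~70% (54%-86%)
```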


Subject(s)
Decision Support Systems, Clinical; Emergency Service, Hospital; Physicians; Adolescent; Adult; Aged; Emergency Service, Hospital/organization & administration; Female; Humans; Middle Aged; Surveys and Questionnaires; Triage/methods; Young Adult
2.
J Med Internet Res; 23(9): e29875, 2021 Sep 15.
Article in English | MEDLINE | ID: covidwho-1443976

ABSTRACT

BACKGROUND: Digital clinical measures collected via digital sensing technologies such as smartphones, smartwatches, wearables, ingestibles, and implantables are increasingly used by individuals and clinicians to capture health outcomes or behavioral and physiological characteristics of individuals. Although academia is taking an active role in evaluating digital sensing products, academic contributions to advancing the safe, effective, ethical, and equitable use of digital clinical measures are poorly characterized.

OBJECTIVE: We performed a systematic review to characterize the nature of academic research on digital clinical measures and to compare and contrast the types of sensors used and the sources of funding support for specific subareas of this research.

METHODS: We conducted a PubMed search using a range of search terms to retrieve peer-reviewed articles reporting US-led academic research on digital clinical measures between January 2019 and February 2021. We screened each publication against specific inclusion and exclusion criteria. We then identified and categorized the research studies by type of academic research, sensors used, and funding sources. Finally, we compared and contrasted the funding support for these specific subareas of research and sensor types.

RESULTS: The search retrieved 4240 articles of interest. After screening, 295 articles remained for data extraction and categorization. The top five research subareas were operations research (research analysis; n=225, 76%), analytical validation (n=173, 59%), usability and utility (data visualization; n=123, 42%), verification (n=93, 32%), and clinical validation (n=83, 28%). The three most underrepresented areas of research on digital clinical measures were ethics (n=0, 0%), security (n=1, 0.5%), and data rights and governance (n=1, 0.5%). Movement and activity trackers were the most commonly studied sensor type, and physiological (mechanical) sensors were the least frequently studied. Government agencies provided the most funding for research on digital clinical measures (n=192, 65%), followed by independent foundations (n=109, 37%) and industry (n=56, 19%); the remaining 12% (n=36) of studies were completely unfunded.

CONCLUSIONS: Specific subareas of academic research related to digital clinical measures are not keeping pace with the rapid expansion and adoption of digital sensing products. An integrated and coordinated effort is required across academia, academic partners, and academic funders to establish digital clinical measures as an evidence-based field worthy of our trust.
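The retrieval step described in METHODS can be illustrated with NCBI's public E-utilities API. The sketch below is a minimal example only; the search term shown is a hypothetical placeholder rather than the review's actual query, and the date window follows the abstract.

```python
# Sketch of the kind of PubMed query described in METHODS, using the public
# NCBI E-utilities esearch endpoint. The search term is a hypothetical
# placeholder; the review's actual term list is not given in the abstract.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

params = {
    "db": "pubmed",
    "term": '"digital clinical measures" OR "wearable sensor"',  # placeholder terms
    "datetype": "pdat",        # filter on publication date
    "mindate": "2019/01/01",
    "maxdate": "2021/02/28",
    "retmax": 5000,            # return up to 5000 PMIDs
    "retmode": "json",
}

resp = requests.get(ESEARCH, params=params, timeout=30)
resp.raise_for_status()
result = resp.json()["esearchresult"]
pmids = result["idlist"]
print(f'{result["count"]} records found; retrieved {len(pmids)} PMIDs for screening')
```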


Subject(s)
Delivery of Health Care; Smartphone; Humans
3.
JMIR Ment Health; 8(9): e26029, 2021 Sep 15.
Article in English | MEDLINE | ID: covidwho-1409797

ABSTRACT

BACKGROUND: Between 15% and 70% of adolescents report experiencing cybervictimization. Cybervictimization is associated with multiple negative consequences, including depressed mood. Few validated, easily disseminated interventions exist to prevent cybervictimization and its consequences. With over 97% of adolescents using social media (such as YouTube, Facebook, Instagram, or Snapchat), recruiting and delivering a prevention intervention through social media and apps may improve the accessibility of prevention tools for at-risk youth.

OBJECTIVE: This study aims to evaluate the feasibility and acceptability of, and obtain preliminary outcome data on, IMPACT (Intervention Media to Prevent Adolescent Cyber-Conflict Through Technology), a brief, remote app-based intervention to prevent and reduce the effects of cyberbullying.

METHODS: From January 30, 2020, to May 3, 2020, a national sample of 80 adolescents with a history of past-year cybervictimization was recruited through Instagram for a randomized controlled trial of IMPACT, a brief, remote research assistant-led intervention and a fully automated app-based program, versus enhanced web-based resources (control). Feasibility and acceptability were measured by consent, daily use, and validated surveys. Although the trial was not powered for efficacy, outcomes (victimization, bystander self-efficacy, and well-being) were assessed with validated measures at 8 and 16 weeks and evaluated using a series of longitudinal mixed models.

RESULTS: Regarding feasibility, 24.5% (121/494) of eligible participants provided contact information; of these, 69.4% (84/121) completed full enrollment procedures. Of the participants enrolled, 45% (36/80) were randomized into the IMPACT intervention group and 55% (44/80) into the enhanced web-based resources group. All participants randomized to the intervention condition completed the remote intervention session, and 89% (77/80) of the daily prompts were answered. The retention rate was 99% (79/80) at 8 weeks and 96% (77/80) at 16 weeks for all participants. Regarding acceptability, 100% (36/36) of the intervention participants were at least moderately satisfied with IMPACT overall, and 92% (33/36) were at least moderately satisfied with the app. At both 8 and 16 weeks, well-being was significantly higher (β=1.17, SE 0.87, P=.02 at 8 weeks and β=3.24, SE 0.95, P<.001 at 16 weeks) and psychological stress was lower (β=-.66, SE 0.08, P=.04 at 8 weeks and β=-.89, SE 0.09, P<.001 at 16 weeks) among IMPACT users than among control group users. Participants in the intervention group attempted significantly more bystander interventions than those in the control group at 8 weeks (β=.82, SE 0.42; P=.02).

CONCLUSIONS: This remote app-based intervention for victims of cyberbullying was feasible and acceptable, increased overall well-being and bystander interventions, and decreased psychological stress. These findings are especially noteworthy given that the trial took place during the COVID-19 pandemic. Recruiting adolescents through Instagram can be a successful strategy for identifying and intervening with those at the highest risk of cybervictimization.

TRIAL REGISTRATION: ClinicalTrials.gov NCT04259216; http://clinicaltrials.gov/ct2/show/NCT04259216.
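The "series of longitudinal mixed models" described in METHODS can be illustrated with a minimal sketch using statsmodels. The data file and column names below are hypothetical, and the authors' exact model specification is not given in the abstract; this shows only the general form of a random-intercept model with a group-by-time interaction.

```python
# Sketch of one longitudinal mixed model of the kind described in METHODS,
# assuming a long-format data frame with hypothetical column names
# (participant_id, timepoint, group, wellbeing).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per participant per assessment (baseline, 8 wk, 16 wk).
df = pd.read_csv("impact_outcomes_long.csv")

# Random intercept per participant; fixed effects for study arm, visit,
# and their interaction (the group-by-time effects reported as beta above).
model = smf.mixedlm(
    "wellbeing ~ C(group, Treatment('control')) * C(timepoint)",
    data=df,
    groups=df["participant_id"],
)
fit = model.fit()
print(fit.summary())  # coefficients, standard errors, and p values
```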
